As algorithmic predictions have an increasing effect on humans, model interpretability has become an important problem in machine learning (ML). Explanations can help users understand not only why ML models make certain predictions, but also how those predictions can be changed. In this thesis, we study the interpretability of ML models from three vantage points: algorithms, users, and pedagogy, and contribute several novel solutions to the interpretability problem.
When medical images are used for diagnosis, whether by a clinician or an artificial intelligence (AI) system, it is important that the images are of high quality. When image quality is low, the medical exam that produced the images often needs to be redone. In telemedicine, a common problem is that quality issues are only flagged after the patient has left the clinic, meaning they must return to redo the exam. This can be especially difficult for people living in remote regions, who account for a large portion of the patients at Portemedicina, a digital healthcare organization in Brazil. In this paper, we report on ongoing work regarding (i) an AI system that flags and explains low-quality medical images in real time, (ii) an interview study to understand the explanation needs of stakeholders using the AI system at OurCompany, and (iii) a longitudinal user study design to examine the effect of including explanations on the workflow of the technicians in our clinics. To the best of our knowledge, this would be the first longitudinal study evaluating the impact of XAI methods on end users -- stakeholders who use the AI system but have no AI-specific expertise. We welcome feedback and suggestions on our experimental setup.
In this work, we describe the setup of a technical, graduate-level course on Fairness, Accountability, Confidentiality and Transparency in Artificial Intelligence (FACT-AI) at the University of Amsterdam, which teaches these concepts through the lens of reproducibility. The focal point of the course is a group project based on reproducing existing FACT-AI algorithms from top AI conferences and writing a report about the experience. In the first iteration of the course, we created an open-source repository with the code implementations from the group projects. In the second iteration, we encouraged students to submit their group projects to the Machine Learning Reproducibility Challenge, which resulted in 9 reports from our course being accepted. We reflect on our experience teaching the course over two academic years, one of which coincided with a global pandemic, and propose guidelines for teaching FACT-AI through reproducibility in graduate-level AI programs. We hope this can be a useful resource for instructors who want to set up similar courses at their universities in the future.
As algorithmic decisions have an increasing effect on humans, model interpretability has become an important problem in machine learning (ML). Counterfactual explanations can help users understand not only why ML models make certain decisions, but also how those decisions can be changed. We frame the problem of finding counterfactual explanations as a gradient-based optimization task and extend previous work that could only be applied to differentiable models. To accommodate non-differentiable models such as tree ensembles, we use probabilistic model approximations in the optimization framework. We introduce an approximation technique that is effective for finding counterfactual explanations for the predictions of the original model, and show that our counterfactual examples are significantly closer to the original instances than those produced by other methods designed specifically for tree ensembles.
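A minimal sketch of the general recipe, not the authors' exact method: replace each hard split in a stump ensemble with a sigmoid so the prediction becomes differentiable, then run gradient descent on a candidate counterfactual. The toy stumps, temperature, and loss weights below are illustrative assumptions.

```python
# Hedged sketch: sigmoid-smoothed tree ensemble + gradient-based counterfactual.
import torch

# Toy "ensemble": two decision stumps on a 2-D input.
# stump: if x[feature] > threshold then v_hi else v_lo
stumps = [
    (0, 0.5, 0.0, 1.0),
    (1, 0.3, 0.0, 1.0),
]

def smooth_predict(x, temperature=10.0):
    """Differentiable surrogate: each hard split becomes a sigmoid."""
    out = 0.0
    for f, t, v_lo, v_hi in stumps:
        p_hi = torch.sigmoid(temperature * (x[f] - t))  # ~1 when x[f] > t
        out = out + p_hi * v_hi + (1.0 - p_hi) * v_lo
    return out / len(stumps)  # average vote in [0, 1]

x0 = torch.tensor([0.7, 0.6])        # original instance, predicted class 1
x = x0.clone().requires_grad_(True)  # candidate counterfactual
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    pred = smooth_predict(x)
    # hinge pushes the smoothed score below 0.4; the squared-distance
    # term keeps the counterfactual close to the original instance
    loss = torch.relu(pred - 0.4) + 0.1 * ((x - x0) ** 2).sum()
    loss.backward()
    opt.step()

print("counterfactual:", x.detach().numpy())
print("smoothed prediction:", smooth_predict(x).item())
```

Once the optimization converges, the counterfactual is validated against the original hard ensemble, since the sigmoids are only a surrogate used to obtain gradients.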
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
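A minimal numpy sketch of the time-to-first-spike coding that such mappings build on (illustrative assumptions throughout: closed-form spike times instead of a simulated network, and a threshold large enough to bound all activations). Each value x is encoded as a spike time t_ref - x, the membrane potential after all input spikes have arrived equals the usual affine sum, and forcing a spike no later than a fixed deadline implements the ReLU clamp exactly.

```python
# Hedged sketch of single-spike (time-to-first-spike) coding for a ReLU net.
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, W, b):
    return np.maximum(0.0, W @ x + b)

def ttfs_layer(t_in, W, b, theta=100.0, c=1.0, t_ref=1.0):
    """One layer of single-spike neurons, solved in closed form.

    Input spike times encode values as x = t_ref - t_in. After all input
    spikes have arrived (t >= t_ref), the membrane potential is
    V(t_ref) = W @ x + b, and a constant "closing" current c drives the
    neuron to threshold theta, so it fires at
        t_spike = t_ref + (theta - V(t_ref)) / c.
    Forcing a spike no later than t_max = t_ref + theta / c clamps
    negative activations to zero, i.e. implements the ReLU exactly
    (assuming all activations stay below theta).
    """
    a = W @ (t_ref - t_in) + b
    t_spike = np.minimum(t_ref + (theta - a) / c, t_ref + theta / c)
    y = c * (t_ref + theta / c - t_spike)  # decoded activation = ReLU(a)
    return t_ref - y                        # re-encode for the next layer

# random 2-layer ReLU net and its single-spike counterpart
W1, b1 = rng.normal(size=(5, 3)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)
x = rng.uniform(0, 1, size=3)

relu_out = relu_layer(relu_layer(x, W1, b1), W2, b2)
snn_out = 1.0 - ttfs_layer(ttfs_layer(1.0 - x, W1, b1), W2, b2)
print(np.allclose(relu_out, snn_out))  # True: the mapping is exact
```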
Recent work has reported that AI classifiers trained on audio recordings can accurately predict severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection status. Here, we undertake a large-scale study of audio-based deep learning classifiers as part of the UK government's pandemic response. We collect and analyse a dataset of audio recordings from 67,842 individuals with linked metadata, including reverse transcription polymerase chain reaction (PCR) test outcomes, of whom 23,514 tested positive for SARS-CoV-2. Subjects were recruited via the UK government's National Health Service Test-and-Trace programme and the REal-time Assessment of Community Transmission (REACT) randomised surveillance survey. In an unadjusted analysis of our dataset, AI classifiers predict SARS-CoV-2 infection status with high accuracy (Receiver Operating Characteristic Area Under the Curve (ROC-AUC) 0.846 [0.838, 0.854]), consistent with the findings of previous studies. However, after matching on measured confounders, such as age, gender, and self-reported symptoms, our classifiers' performance is much weaker (ROC-AUC 0.619 [0.594, 0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by simple predictive scores based on user-reported symptoms.
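A sketch of the kind of matched analysis described above; the column names, discretized confounders, and 1:1 exact-matching recipe are assumptions for illustration, not the study's protocol.

```python
# Hedged sketch: recompute ROC-AUC on a 1:1 exactly matched subset.
import pandas as pd
from sklearn.metrics import roc_auc_score

def matched_auc(df, confounders=("age_band", "gender", "symptomatic")):
    """Assumed columns: 'label' (0/1 infection status), 'score'
    (classifier output), plus discretized confounder columns."""
    pos = df[df.label == 1]
    neg = df[df.label == 0]
    rows, used = [], set()
    for _, p in pos.iterrows():
        mask = pd.Series(True, index=neg.index)
        for c in confounders:
            mask &= neg[c] == p[c]           # identical confounder values
        candidates = neg.index[mask & ~neg.index.isin(used)]
        if len(candidates):                  # pair found: keep both rows
            used.add(candidates[0])
            rows += [p, neg.loc[candidates[0]]]
    matched = pd.DataFrame(rows)
    return roc_auc_score(matched.label, matched.score)
```

Comparing the unmatched and matched AUCs in this way separates what the classifier learns about infection from what it learns about demographics and symptom reporting.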
Since early in the coronavirus disease 2019 (COVID-19) pandemic, there has been interest in using artificial intelligence methods to predict COVID-19 infection status from vocal audio signals, for example cough recordings. However, existing studies have limitations in their data collection and in how the performance of the proposed predictive models is assessed. This paper rigorously assesses state-of-the-art machine learning techniques for predicting COVID-19 infection status from vocal audio signals, using a dataset collected by the UK Health Security Agency. This dataset includes acoustic recordings and extensive study-participant metadata. We provide guidelines on testing the performance of methods that classify COVID-19 infection status based on acoustic features, and we discuss how these guidelines can be extended more generally to the development and assessment of predictive methods based on public health datasets.
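One ingredient of such an assessment is reporting uncertainty around metrics like ROC-AUC. A minimal sketch using a percentile bootstrap over participants (an assumed procedure, not necessarily the paper's):

```python
# Hedged sketch: 95% percentile-bootstrap confidence interval for ROC-AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc(labels, scores, n_boot=1000, seed=0):
    """Resample subjects with replacement and collect AUCs."""
    rng = np.random.default_rng(seed)
    labels, scores = np.asarray(labels), np.asarray(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(labels), len(labels))
        if labels[idx].min() == labels[idx].max():
            continue  # resample contained a single class; AUC undefined
        aucs.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(labels, scores), (lo, hi)
```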
The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up to help beat coronavirus' digital survey alongside demographic, self-reported symptom, and respiratory condition data, and were linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,794 of 72,999 participants and 24,155 of 25,776 positive cases. Respiratory symptoms were reported by 45.62% of participants. This dataset has additional potential uses for bioacoustics research, with 11.30% of participants reporting asthma and 27.20% having linked influenza PCR test results.
This paper describes the 5th edition of the Predicting Video Memorability Task as part of MediaEval2022. This year we have reorganised and simplified the task to enable a greater depth of inquiry. As last year, two datasets are provided to facilitate generalisation; however, this year we have replaced the TRECVid2019 Video-to-Text dataset with the VideoMem dataset to remedy underlying data quality issues, and we have prioritised short-term memorability prediction by elevating the Memento10k dataset to the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions.
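Memorability prediction in this task has typically been scored with Spearman's rank correlation between predicted and ground-truth memorability scores; a minimal sketch of that evaluation on toy data:

```python
# Hedged sketch of the rank-correlation evaluation (toy values only).
from scipy.stats import spearmanr

ground_truth = [0.91, 0.85, 0.78, 0.96, 0.62]  # per-video memorability
predicted    = [0.88, 0.80, 0.83, 0.99, 0.55]  # a model's predictions

rho, pval = spearmanr(ground_truth, predicted)
print(f"Spearman's rho = {rho:.3f} (p = {pval:.3f})")
```

Rank correlation is a natural fit here because only the relative ordering of videos by memorability matters, not the calibration of the raw scores.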
Can we leverage the audiovisual information already present in video to improve self-supervised representation learning? To answer this question, we study various pretraining architectures and objectives within the masked autoencoding framework, motivated by the success of similar methods in natural language and image understanding. We show that we can achieve significant improvements on audiovisual downstream classification tasks, surpassing the state-of-the-art on VGGSound and AudioSet. Furthermore, we can leverage our audiovisual pretraining scheme for multiple unimodal downstream tasks using a single audiovisual pretrained model. We additionally demonstrate the transferability of our representations, achieving state-of-the-art audiovisual results on Epic Kitchens without pretraining specifically for this dataset.
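A minimal PyTorch sketch of the masked-autoencoding objective this pretraining builds on (the tiny transformer, 75% mask ratio, and token shapes are illustrative assumptions, not the paper's architecture): mask most tokens, encode only the visible ones, and reconstruct the masked ones with an L2 loss.

```python
# Hedged sketch: masked autoencoding over concatenated audio+video tokens.
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, dim=64, n_tokens=32, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.pos = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 1)
        self.head = nn.Linear(dim, dim)

    def forward(self, tokens):                   # tokens: (B, N, dim)
        B, N, D = tokens.shape
        n_keep = int(N * (1 - self.mask_ratio))
        perm = torch.rand(B, N).argsort(dim=1)   # random mask per sample
        keep, masked = perm[:, :n_keep], perm[:, n_keep:]
        x = tokens + self.pos
        visible = torch.gather(x, 1, keep[..., None].expand(-1, -1, D))
        enc = self.encoder(visible)              # encode visible tokens only
        # decoder sees encoded visible tokens plus learned mask tokens
        # carrying the positions of the masked tokens
        mask_tok = self.mask_token + self.pos[masked]
        dec = self.decoder(torch.cat([enc, mask_tok], dim=1))
        pred = self.head(dec[:, n_keep:])        # predictions at masked slots
        target = torch.gather(tokens, 1, masked[..., None].expand(-1, -1, D))
        return ((pred - target) ** 2).mean()     # reconstruction loss

# audio and video patch embeddings concatenated into one token sequence
audio, video = torch.randn(2, 16, 64), torch.randn(2, 16, 64)
loss = TinyMAE()(torch.cat([audio, video], dim=1))
loss.backward()
```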